
    Evaluating 3D pointing techniques

    This dissertation investigates issues related to the empirical evaluation of 3D pointing interfaces. In this context, the term "3D pointing" is appropriated from the analogous 2D pointing literature to refer to 3D point selection tasks, i.e., specifying a target in three-dimensional space. Such pointing interfaces are required for interaction with virtual 3D environments, e.g., in computer games and virtual reality. Researchers have developed and empirically evaluated many such techniques, yet several technical issues and human factors complicate evaluation. Moreover, results tend not to be directly comparable between experiments, as these usually employ different methodologies and measures. Based on well-established methods for comparing 2D pointing interfaces, this dissertation investigates different aspects of 3D pointing. Its main objective is to establish methods for direct and fair comparison of 2D and 3D pointing interfaces. The dissertation proposes and then validates an experimental paradigm for evaluating 3D interaction techniques that rely on pointing, and also investigates technical considerations such as latency and device noise. Results show that the mouse outperforms the other tested 3D input techniques by between 10% and 60% in all conditions. Moreover, a monoscopic cursor tends to perform better than a stereo cursor on stereo displays, by as much as 30% for deep targets. Results suggest that common 3D pointing techniques are best modelled by first projecting target parameters (i.e., distance and size) to the screen plane.
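    The modelling conclusion above can be illustrated with a short sketch: project the 3D target's movement distance and size onto the screen plane via a pinhole perspective model, then apply the standard 2D Fitts' law index of difficulty. The function names, focal length, and target values below are illustrative assumptions, not parameters from the dissertation itself.

```python
import math

def project_to_screen(depth, length_world, focal_length=1.0):
    """Perspective-project a world-space length at a given viewing depth
    onto the screen plane (pinhole camera model)."""
    return focal_length * length_world / depth

def fitts_id(distance, width):
    """Shannon formulation of the Fitts' law index of difficulty (bits)."""
    return math.log2(distance / width + 1)

# Hypothetical 3D target: 2 m deep, 0.5 m movement distance, 5 cm diameter.
d_screen = project_to_screen(2.0, 0.5)    # projected movement distance
w_screen = project_to_screen(2.0, 0.05)   # projected target size
print(round(fitts_id(d_screen, w_screen), 2))  # → 3.46 bits
```

    Because both projections divide by the same depth, the ratio D/W is depth-invariant here; the projection matters once distance and size lie at different depths along the movement.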

    RotoSwype : word-gesture typing using a ring

    Funding: NSERC Discovery Grant #2018-05187, the Canada Foundation for Innovation Infrastructure Fund “Facility for Fully Interactive Physio-digital Spaces” (#33151), and Ontario Early Researcher Award #ER16-12-184.

    We propose RotoSwype, a technique for word-gesture typing using the orientation of a ring worn on the index finger. RotoSwype enables one-handed text input without encumbering the hand with a device, a desirable quality in many scenarios, including virtual or augmented reality. The method is evaluated in two arm positions: with the hand raised and the palm parallel to the ground, and with the hand resting at the side with the palm facing the body. A five-day study finds that both arm positions achieved speeds of at least 14 words per minute (WPM) with uncorrected error rates near 1%, outperforming previous comparable techniques.

    A head coupled cursor for 2D selection in virtual reality

    We present a head-coupled cursor to support 2D selection in head-mounted displays (HMDs). The head-coupled cursor can be used with any 2DOF input source and, unlike other head-based selection methods, can be moved independently within the screen plane of the HMD while remaining in the HMD’s field of view (FOV). We propose an experiment to evaluate the head-coupled cursor in future work.

    VR collide! Comparing collision-avoidance methods between colocated virtual reality users

    We present a pilot study comparing visual feedback mechanisms for preventing physical collisions between co-located VR users: Avatar (a 3D avatar co-located with the other user), BoundingBox (similar to HTC's "chaperone"), and CameraOverlay (a live video feed overlaid on the virtual environment). Using a simulated second user, we found that CameraOverlay and Avatar had the fastest travel times around an obstacle, but BoundingBox had the fewest collisions, at 0.07 collisions/trial versus 0.2 collisions/trial for Avatar and 0.4 collisions/trial for CameraOverlay. However, subjective participant impressions strongly favoured Avatar and CameraOverlay over BoundingBox. Based on these results, we propose future studies on hybrid methods combining the best aspects of Avatar (speed, user preference) and BoundingBox (safety).

    Camera-based selection with cardboard HMDs

    We present a study of selection techniques for low-cost mobile VR devices, such as Google Cardboard, using the outward-facing camera on modern smartphones. We compared three selection techniques: air touch, head ray, and finger ray. Initial evaluation indicates that hand-based selection (air touch) performed worst, while a ray cast from the tracked finger position offered much higher selection performance. Our results suggest that camera-based mobile tracking is feasible with ray-based techniques.

    EZCursorVR: 2D selection with virtual reality head-mounted displays

    We present an evaluation of a new selection technique for virtual reality (VR) systems presented on head-mounted displays. The technique, dubbed EZCursorVR, presents a 2D cursor that moves in a head-fixed plane, simulating 2D desktop-like cursor control in VR. The cursor can be controlled by any 2DOF input device, but also works with 3/6DOF devices using appropriate mappings. We conducted an experiment based on ISO 9241-9 comparing the effectiveness of EZCursorVR using a mouse, a joystick in both velocity-control and position-control mappings, a 2D-constrained ray-based technique, a standard 3D ray, and finally selection via head motion. Results indicate that the mouse offered the highest performance in terms of throughput, movement time, and error rate, while the position-control joystick was worst. The 2D-constrained ray-casting technique proved an effective alternative to the mouse when performing selections with EZCursorVR, offering better performance than standard ray-based selection.
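    The throughput measure used in ISO 9241-9 style evaluations like the one above is conventionally computed from effective task parameters: the effective width We = 4.133 × SD of the selection endpoints along the task axis, and throughput = effective index of difficulty divided by mean movement time. A minimal sketch follows; the trial values are hypothetical, not data from the paper.

```python
import math
from statistics import mean, stdev

def throughput(amplitudes, endpoints, times):
    """ISO 9241-9 style throughput (bits/s) with the effective-width
    correction. endpoints are signed offsets from the target centre
    along the task axis."""
    we = 4.133 * stdev(endpoints)   # effective width from endpoint spread
    de = mean(amplitudes)           # effective movement distance
    ide = math.log2(de / we + 1)    # effective index of difficulty (bits)
    return ide / mean(times)        # bits per second

# Hypothetical per-trial data: amplitudes (px), endpoint offsets (px),
# and movement times (s) for one condition.
amps = [400, 400, 400, 400, 400]
ends = [-6.0, 3.0, 1.0, -2.0, 5.0]
mts = [0.62, 0.58, 0.71, 0.65, 0.60]
print(round(throughput(amps, ends, mts), 2))  # bits/s for this condition
```

    Because We is derived from the observed endpoint scatter, throughput folds speed and accuracy into one figure, which is what makes devices as different as a mouse and a 3D ray directly comparable.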

    Viewpoint snapping to reduce cybersickness in virtual reality

    Cybersickness in virtual reality (VR) is an ongoing problem, despite recent advances in technology. In this paper, we propose a method for reducing the likelihood of cybersickness onset when using stationary (e.g., seated) VR setups. Our approach reduces optic flow through discontinuous displacement: the viewpoint is "snapped" during fast movement that would otherwise induce cybersickness. We compared our technique, which we call viewpoint snapping, to a control condition without it in a custom-developed VR first-person shooter game. We measured participant cybersickness via the Simulator Sickness Questionnaire (SSQ) and user-reported nausea levels, along with presence and objective error rate. Overall, our results indicate that viewpoint snapping significantly reduced SSQ-reported cybersickness by about 40% and reduced participant nausea levels, especially with longer VR exposure. Presence levels and error rates were not significantly different between viewpoint snapping and the control condition.
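    The core idea, quantizing the viewpoint during fast motion so the intervening optic flow is never rendered, can be sketched in a few lines. The snap step and speed threshold below are illustrative assumptions; the paper's actual parameters may differ.

```python
def snapped_yaw(raw_yaw_deg, angular_speed_deg_s,
                snap_step_deg=22.5, speed_threshold_deg_s=90.0):
    """Return the yaw to render this frame. During fast rotation the
    viewpoint jumps between discrete increments instead of sweeping
    smoothly, suppressing the optic flow that drives cybersickness.
    Parameter values are hypothetical, not taken from the paper."""
    if angular_speed_deg_s < speed_threshold_deg_s:
        return raw_yaw_deg  # slow motion: render continuously as usual
    # Fast motion: snap to the nearest discrete increment.
    return round(raw_yaw_deg / snap_step_deg) * snap_step_deg

print(snapped_yaw(100.0, 200.0))  # fast turn → 90.0 (snapped)
print(snapped_yaw(100.0, 30.0))   # slow turn → 100.0 (unchanged)
```

    The same quantization could be applied per frame to translation, trading visual continuity for a large reduction in rendered self-motion.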

    Head vs. Eye-based selection in virtual reality

    This demo presents a VR system for comparing eye- and head-based selection performance using the recently released FOVE head-mounted display. The system presents a virtual environment modelled after the ISO 9241-9 reciprocal selection task, with targets presented at varying depths. We have used the system to compare eye-based selection and head-based selection (i.e., gaze direction) in isolation, as well as a third condition that used both eye-tracking and head-tracking at once.

    Player performance with different input devices in virtual reality first-person shooter games

    First-person shooter (FPS) games are a competitive game genre, and their players commonly try to maximize performance by using better input devices. Numerous previous studies have analyzed different game controllers (see, e.g., [1]). Tracked input devices such as the Hydra offer some advantages over desktop input devices in VR FPS games. We thus hypothesize that VR controllers will offer substantially better performance than both the mouse and the gamepad in first-person shooter targeting, due to the improved naturalness of control. Our study compared 3D selection performance between a mouse, a 3D tracker, and a game controller in a head-mounted display VR context.